
(2018) GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training

Akcay S, Atapour-Abarghouei A, Breckon T P. GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training. 2018.



1. Overview


This paper proposes a novel anomaly detection model that outputs an anomaly score.

  • Conditional GAN
  • jointly learns generation in the high-dimensional image space and inference in the latent space
  • trained only on normal samples

1.1. Problem Definition



  • large training set: M normal samples
  • smaller testing set: N samples (both normal and abnormal)
  • model f: learns the normal data distribution and minimizes the output anomaly score during training
  • φ: threshold on the anomaly score (set to 0.2); a test sample is flagged as abnormal when its score exceeds φ (see the sketch below)
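
A minimal sketch of the decision rule implied by this setup, assuming a trained scoring function (here a placeholder called `anomaly_score`) and the threshold φ = 0.2 from the note:

```python
# Minimal sketch of the test-time decision rule (names are placeholders).
def is_abnormal(x, anomaly_score, phi=0.2):
    """Flag a test sample as abnormal when its anomaly score exceeds the threshold."""
    return anomaly_score(x) > phi
```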



2. Methods


2.1. Model
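
GANomaly's generator is an encoder-decoder-encoder: G_E encodes the input, G_D decodes it back to an image, and a second encoder E re-embeds the reconstruction; a discriminator D (omitted here) is trained adversarially against G. A rough PyTorch sketch of that pipeline; the layer widths, strides, latent size, and the 32×32 input are my assumptions, not the paper's exact DCGAN-style configuration:

```python
import torch
import torch.nn as nn

def make_encoder(in_ch=3, latent_dim=100):
    """Conv encoder: 32x32 image -> latent vector (sizes assumed)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),        # 32 -> 16
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1),           # 16 -> 8
        nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 256, 4, stride=2, padding=1),          # 8 -> 4
        nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(256, latent_dim, 4, stride=1, padding=0),   # 4 -> 1
    )

def make_decoder(out_ch=3, latent_dim=100):
    """Transposed-conv decoder: latent vector -> reconstructed image."""
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, 256, 4, stride=1, padding=0),  # 1 -> 4
        nn.BatchNorm2d(256), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),         # 4 -> 8
        nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),          # 8 -> 16
        nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),       # 16 -> 32
        nn.Tanh(),
    )

class GanomalyGenerator(nn.Module):
    """Encoder-decoder-encoder: x -> z -> x_hat -> z_hat."""
    def __init__(self, in_ch=3, latent_dim=100):
        super().__init__()
        self.G_E = make_encoder(in_ch, latent_dim)  # first encoder
        self.G_D = make_decoder(in_ch, latent_dim)  # decoder (reconstructor)
        self.E = make_encoder(in_ch, latent_dim)    # second encoder on x_hat

    def forward(self, x):
        z = self.G_E(x)        # latent code of the input
        x_hat = self.G_D(z)    # reconstruction of the input
        z_hat = self.E(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat
```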




2.1.1. For Abnormal Images

  • G_D is not able to reconstruct the abnormal regions
    • the network is trained only on normal samples, so its parametrization is not suited to generating abnormal samples
    • an output x̂ that misses the abnormalities leads the second encoder E to map x̂ to a vector ẑ that also misses the abnormal feature representation, causing dissimilarity between z and ẑ (this gap is used as the anomaly score; see the sketch below)
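
The z-versus-ẑ dissimilarity above is what the paper turns into the anomaly score, A(x) = ||G_E(x) − E(G(x))||₁. A short sketch on top of the generator class from Section 2.1 (the per-sample mean absolute difference is my simplification of the L1 norm):

```python
import torch

@torch.no_grad()
def anomaly_score(model, x):
    """Latent-space residual A(x) = || G_E(x) - E(G(x)) ||_1 (mean over latent dims)."""
    model.eval()
    _, z, z_hat = model(x)                              # x_hat, z, z_hat
    return torch.abs(z - z_hat).flatten(1).mean(dim=1)  # one score per sample
```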

2.2. Loss Function
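
The generator is trained with a weighted sum of three terms; a sketch in the paper's notation, where f(·) denotes the discriminator's intermediate feature extractor (reading the λ = 50 of Section 3.2 as the contextual weight w_con is my interpretation):

```latex
\begin{aligned}
\mathcal{L}_{adv} &= \big\lVert f(x) - \mathbb{E}_{x \sim p_X}\, f(G(x)) \big\rVert_2
  &&\text{adversarial (feature matching)} \\
\mathcal{L}_{con} &= \lVert x - G(x) \rVert_1
  &&\text{contextual (reconstruction)} \\
\mathcal{L}_{enc} &= \lVert G_E(x) - E(G(x)) \rVert_2
  &&\text{encoder (latent consistency)} \\
\mathcal{L} &= w_{adv}\,\mathcal{L}_{adv} + w_{con}\,\mathcal{L}_{con} + w_{enc}\,\mathcal{L}_{enc}
\end{aligned}
```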





3. Experiments


3.1. Dataset

  • MNIST (32×32): one digit class is treated as the anomaly, the remaining classes as normal
  • CIFAR-10: likewise, one class is treated as abnormal, the rest as normal
  • University Baggage Anomaly Dataset (UBA) (64×64): abnormal classes are knife, gun, and gun component

3.2. Setting

  • λ = 50
  • Metrics: AUC of the ROC curve, i.e. TPR (true positive rate) vs. FPR (false positive rate); see the evaluation sketch below
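
Since the raw per-sample scores are on an arbitrary scale, evaluation presumably min-max rescales them to [0, 1] over the test set and then computes AUC from the resulting ROC curve. A sketch (the rescaling rule follows my reading of the paper; `scores` and `labels` are placeholders):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(scores, labels):
    """Min-max scale test-set anomaly scores to [0, 1], then report ROC AUC.

    scores: per-sample anomaly scores A(x) over the test set
    labels: ground-truth flags (1 = abnormal, 0 = normal)
    """
    scores = np.asarray(scores, dtype=float)
    scaled = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    return roc_auc_score(labels, scaled)
```

Note that AUC itself is invariant to this monotone rescaling; the scaling mainly makes a fixed threshold such as φ = 0.2 comparable across runs.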

3.3. Comparison





  • Table 1. All approaches show very poor performance when digit 1 is treated as the abnormal class. This is probably due to the simple linear shape of this digit, which any of the models can easily fit even without seeing it during training.
  • Table 2. The relatively lower quantitative results on this dataset are likely because, for each selected abnormal category, there exists a normal class that is visually similar to it (e.g., plane vs. bird, cat vs. dog).


3.4. Speed